> Date: Thu, 26 Mar 92 18:06:51 GMT-0500
> From: Linda Murphy <murphy@dairp.upenn.edu>

> My environment: a NeXT.
> ...I am running pre-release b of version 0.13 of the WWW application.
> I get ... Invalid access prefix for 'telnet://info.cern.ch'
> Can be one of news: but not 'telnet:'.

The NeXT client was frozen last summer, so it has none of the innovations which have
occurred since then. I have it relatively high on my (rather large) agenda to bring it
up to date from a prototype, include the re-engineered common library, and re-release
it. (I was under pressure here not to put too much work into it because NeXTs were not
official CERN workstations for a while: everyone wanted X. Now that X is coming from
other people, the pressure should ease.)

Another problem you will find is that the NeXT client can't cope with the very long
identifiers returned by the latest WAIS servers, such as the directory of servers.
It just crashes, because I put in an arbitrary hard limit (bad! :-().

Apart from telnet:, it also can't handle gopher:.

I'm really sorry for that lack of functionality. I use the app all the time myself,
so it bugs me too. I'll do it when I have time, but I think better server functionality
should come first. Someone in Hawaii whose mail address I don't seem to have (are you
out there?) thought they might find six person-months to put into the NeXT app, which
would be great.

> Further down in the bug list, you mention serialisation and web
> traversal. What do you mean by this term?

> --lam

I mean a feature to turn a web into a serial document, for example to print it on
paper, by traversing the web. This is really needed -- the world is looking for ways
of turning text into hypertext, but the moment you do it, you want to turn it back
again for people who want paper! A traversal and concatenation, followed by a sed
file to turn it into TeX macros, should cover it.

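As a rough illustration only, here is a minimal sketch of such a serialisation pass,
written in Python. The fetch() and links_in() helpers are hypothetical stand-ins for
whatever the library would actually provide, and the traversal is shown depth-first
since the message does not fix an order; the concatenated output is what would then
be handed to the sed/TeX step.

    def serialise(start, fetch, links_in):
        # Turn a web into one serial document by traversing it from a
        # starting node and concatenating each document's text exactly once.
        seen = set()                       # addresses already emitted
        parts = []                         # document texts in traversal order
        stack = [start]
        while stack:
            address = stack.pop()
            if address in seen:
                continue
            seen.add(address)
            document = fetch(address)      # hypothetical: retrieve the node's text
            parts.append(document)
            # Push outgoing links so that linked documents follow their parent.
            stack.extend(reversed(links_in(document)))   # hypothetical link extractor
        return "\n".join(parts)

The result is a single flat text that a separate post-processing step (the sed file
producing TeX macros mentioned above) could turn into something printable.
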
Also one imagines tools which traverse the web recursively in a breadth-first way,
looking for things -- interesting data and indexes, for example. I imagine terminating
the search as a function of the number of links traversed, modified by the
"interestingness" of documents found on the way (judged by the words they contain,
matched against a query). This is a step toward a "knowbot"-like tool for resource
discovery. Now that we have a real web to play with, we can start making such machines
in earnest.

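A minimal sketch of that breadth-first discovery loop, again in Python and again with
hypothetical fetch(), links_in() and interestingness() helpers (the last standing in
for matching a document's words against a query). The traversal budget is counted in
links followed and is stretched by interesting finds, so termination works as
described above.

    from collections import deque

    def discover(start, fetch, links_in, interestingness, budget=100, bonus=10):
        # Breadth-first search for interesting documents.  The budget is
        # counted in links followed; each interesting document found along
        # the way extends it, so promising regions get explored further.
        seen = {start}
        queue = deque([start])
        found = []                          # (score, address) of interesting hits
        while queue and budget > 0:
            address = queue.popleft()
            budget -= 1                     # one more link traversed
            document = fetch(address)       # hypothetical retrieval helper
            score = interestingness(document)   # e.g. query words matched in the text
            if score > 0:
                found.append((score, address))
                budget += bonus * score     # interesting finds buy more traversal
            for link in links_in(document):     # hypothetical link extractor
                if link not in seen:
                    seen.add(link)
                    queue.append(link)
        return sorted(found, reverse=True)
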
Tim